Hard to Forget: Poisoning Attacks on Certified Machine Unlearning
Authors
Abstract
The right to erasure requires removal of a user's information from data held by organizations, with rigorous interpretations extending to downstream products such as learned models. Retraining from scratch with the particular user's data omitted fully removes its influence on the resulting model, but comes at a high computational cost. Machine "unlearning" mitigates the cost incurred by full retraining: instead, models are updated incrementally, possibly only requiring retraining when approximation errors accumulate. While rapid progress has been made towards privacy guarantees on the indistinguishability of unlearned and retrained models, current formalisms do not place practical bounds on computation. In this paper we demonstrate how an attacker can exploit this oversight, highlighting a novel attack surface introduced by machine unlearning. We consider an attacker aiming to increase the computational cost of data removal. We derive and empirically investigate poisoning attacks on certified machine unlearning, where strategically designed training data triggers complete retraining when removed.
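The unlearning pipeline that such an attack targets can be sketched as a toy example. This is a minimal illustrative sketch, not the paper's actual algorithm: the one-step Newton "unlearning" update, the `ERR_BUDGET` threshold, and all variable names are assumptions for this sketch. The key mechanic is that cheap incremental updates accumulate approximation error, and the system falls back to expensive full retraining once a budget is exceeded.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 300, 5
X = rng.normal(size=(n, d))
w_true = rng.normal(size=d)
y = (X @ w_true + 0.3 * rng.normal(size=n) > 0).astype(float)
lam = 0.1  # L2 regularization strength


def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))


def train(X, y, steps=50):
    # Full training: Newton's method for L2-regularized logistic regression.
    w = np.zeros(X.shape[1])
    for _ in range(steps):
        p = sigmoid(X @ w)
        g = X.T @ (p - y) + lam * w
        H = X.T @ (X * (p * (1 - p))[:, None]) + lam * np.eye(X.shape[1])
        w -= np.linalg.solve(H, g)
    return w


w = train(X, y)
keep = np.ones(n, dtype=bool)
acc_err = 0.0
ERR_BUDGET = 0.05  # hypothetical budget; retrain fully once exceeded
retrains = 0

for i in range(30):  # process 30 deletion requests
    keep[i] = False
    Xk, yk = X[keep], y[keep]
    # Cheap incremental unlearning: a single Newton step on the remaining data.
    p = sigmoid(Xk @ w)
    g = Xk.T @ (p - yk) + lam * w
    H = Xk.T @ (Xk * (p * (1 - p))[:, None]) + lam * np.eye(d)
    w_new = w - np.linalg.solve(H, g)
    # Track accumulated approximation error (measured exactly here,
    # only for illustration; a real system would use a bound).
    acc_err += np.linalg.norm(w_new - train(Xk, yk))
    if acc_err > ERR_BUDGET:
        w = train(Xk, yk)  # the expensive path an attacker wants to force
        acc_err = 0.0
        retrains += 1
    else:
        w = w_new
```

An attacker in the paper's threat model would contribute points crafted so that their removal inflates the per-deletion error, driving `acc_err` over the budget and forcing frequent full retrains.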
Similar resources
Certified Defenses for Data Poisoning Attacks
Machine learning systems trained on user-provided data are susceptible to data poisoning attacks, whereby malicious users inject false training data with the aim of corrupting the learned model. While recent work has proposed a number of attacks and defenses, little is understood about the worst-case loss of a defense in the face of a determined attacker. We address this by constructing approxi...
Some Submodular Data-Poisoning Attacks on Machine Learners
The security community has long recognized the threats of data-poisoning attacks (a.k.a. causative attacks) on machine learning systems [1–6, 9, 10, 12, 16], where an attacker modifies the training data, so that the learning algorithm arrives at a “wrong” model that is useful to the attacker. To quantify the capacity and limits of such attacks, we need to know first how the attacker may modify ...
μchain: How to Forget without Hard Forks
In this paper, we explore an idea of making (proof-of-work) blockchains mutable. We propose and implement μchain, a mutable blockchain, that enables modifications of blockchain history. Blockchains are, by common definition, distributed and immutable data structures that store a history of events, such as transactions in a digital currency system. While the very idea of mutable event history ma...
Manipulating Machine Learning: Poisoning Attacks and Countermeasures for Regression Learning
As machine learning becomes widely used for automated decisions, attackers have strong incentives to manipulate the results and models generated by machine learning algorithms. In this paper, we perform the first systematic study of poisoning attacks and their countermeasures for linear regression models. In poisoning attacks, attackers deliberately influence the training data to manipulate the...
Attacks on Virtual Machine Emulators
As virtual machine emulators have become commonplace in the analysis of malicious code, malicious code has started to fight back. This paper describes known attacks against the most widely used virtual machine emulators (VMware and VirtualPC). This paper also demonstrates newly discovered attacks on other virtual machine emulators (Bochs, Hydra, QEMU, and Xen), and describes how to defend again...
Journal
Journal title: Proceedings of the ... AAAI Conference on Artificial Intelligence
Year: 2022
ISSN: 2159-5399, 2374-3468
DOI: https://doi.org/10.1609/aaai.v36i7.20736